The physics-informed neural operator (PINO) is a machine learning architecture that has shown promising empirical results for learning partial differential equations. PINO uses the Fourier neural operator (FNO) architecture to overcome the optimization challenges often faced by physics-informed neural networks. Since the convolution operator in PINO uses the Fourier series representation, its gradient can be computed exactly in Fourier space. While Fourier series cannot represent nonperiodic functions, PINO and FNO still have the expressivity to learn nonperiodic problems with Fourier extension via padding. However, computing the Fourier extension in the physics-informed optimization requires solving an ill-conditioned system, resulting in inaccurate derivatives that prevent effective optimization. In this work, we present an architecture that leverages Fourier continuation (FC) to apply the exact-gradient method to PINO for nonperiodic problems. This paper investigates three different ways that FC can be incorporated into PINO by testing their performance on a 1D blowup problem. Experiments show that FC-PINO outperforms padded PINO, improving the equation loss by several orders of magnitude, and that it can accurately capture the third-order derivatives of nonsmooth solution functions.
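The exact-gradient mechanism above amounts to differentiating a truncated Fourier series term by term. A minimal NumPy sketch of spectral differentiation on a periodic grid (only the differentiation step, not the FC-PINO architecture or its Fourier-continuation procedure) is:

```python
import numpy as np

# Spectral differentiation of a periodic function sampled on a uniform grid.
# For periodic data the derivative is exact up to series truncation; for
# nonperiodic data the implicit periodization causes large boundary errors,
# which is the issue Fourier extension / continuation is meant to address.
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(3.0 * x) + 0.5 * np.cos(5.0 * x)          # periodic test function

k = 2.0 * np.pi * np.fft.rfftfreq(n, d=x[1] - x[0])  # angular wavenumbers
du_dx = np.fft.irfft(1j * k * np.fft.rfft(u), n=n)   # derivative computed in Fourier space

exact = 3.0 * np.cos(3.0 * x) - 2.5 * np.sin(5.0 * x)
print(np.max(np.abs(du_dx - exact)))                 # error near machine precision
```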
Recently, neural networks have proven their impressive ability to solve partial differential equations (PDEs). Among them, the Fourier neural operator (FNO) has shown success in learning solution operators for highly nonlinear problems such as turbulent flow. FNO is discretization-invariant: it can be trained on low-resolution data and generalize to high-resolution problems. This property is related to the low-pass filters in FNO, where only a limited number of frequency modes are selected to propagate information. However, selecting an appropriate number of frequency modes and training resolution for different PDEs remains a challenge. Too few frequency modes and low-resolution data hurt generalization, while too many frequency modes and high-resolution data are computationally expensive and lead to over-fitting. To this end, we propose the Incremental Fourier Neural Operator (IFNO), which augments both the frequency modes and the data resolution incrementally during training. We show that IFNO achieves better generalization (around a 15% reduction in testing L2 loss) while reducing the computational cost by 35%, compared to the standard FNO. In addition, we observe that IFNO follows the behavior of implicit regularization in FNO, which explains its excellent generalization ability.
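The core mechanism is that an FNO-style spectral layer only mixes the lowest retained frequency modes, so the mode count can be grown during training simply by slicing a larger weight tensor. A hedged PyTorch sketch of such a layer (generic, with assumed sizes; not the authors' IFNO implementation, and the resolution schedule is omitted) is:

```python
import torch

class SpectralConv1d(torch.nn.Module):
    """1D Fourier layer whose number of active modes can be raised during training.

    This is a generic FNO-style layer, not the IFNO code; the incremental behaviour
    is modelled by the `active_modes` attribute, which an epoch schedule would raise.
    """

    def __init__(self, channels, max_modes):
        super().__init__()
        self.max_modes = max_modes
        self.active_modes = 2  # start with few modes; increase during training
        scale = 1.0 / channels
        self.weight = torch.nn.Parameter(
            scale * torch.randn(channels, channels, max_modes, dtype=torch.cfloat)
        )

    def forward(self, x):                          # x: (batch, channels, grid)
        x_hat = torch.fft.rfft(x)
        m = min(self.active_modes, self.max_modes, x_hat.shape[-1])
        out_hat = torch.zeros_like(x_hat)
        # mix channels only on the lowest m frequency modes (low-pass filter)
        out_hat[..., :m] = torch.einsum("bci,coi->boi", x_hat[..., :m], self.weight[..., :m])
        return torch.fft.irfft(out_hat, n=x.shape[-1])

layer = SpectralConv1d(channels=8, max_modes=16)
x = torch.randn(4, 8, 64)
print(layer(x).shape)          # mixing with only 2 modes active
layer.active_modes = 8         # e.g. raised at a scheduled epoch
print(layer(x).shape)
```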
State estimation is important for a variety of tasks, from forecasting to substituting for unmeasured states in feedback controllers. Performing real-time state estimation for PDEs using provably and rapidly converging observers, such as those based on PDE backstepping, is computationally expensive and in many cases prohibitive. We propose a framework for accelerating PDE observer computations using learning-based approaches that are much faster while maintaining accuracy. In particular, we employ the recently developed Fourier Neural Operator (FNO) to learn the functional mapping from the initial observer state and boundary measurements to the state estimate. By employing backstepping observer gains for previously designed observers with particular convergence rate guarantees, we provide numerical experiments that evaluate the increased computational efficiency gained with FNO. We consider state estimation for three benchmark PDE examples motivated by applications: first, a reaction-diffusion (parabolic) PDE whose state is estimated with an exponential rate of convergence; second, a parabolic PDE with exact prescribed-time estimation; and third, a pair of coupled first-order hyperbolic PDEs that model traffic flow density and velocity. The ML-accelerated observers trained on simulation data sets for these PDEs achieve up to three orders of magnitude improvement in computational speed compared to classical methods. This demonstrates the attractiveness of the ML-accelerated observers for real-time state estimation and control.
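One plausible way to pose the learning problem above as operator regression (an assumed data layout for illustration, not necessarily the paper's exact setup) is to broadcast the boundary measurement history over the spatial grid and stack it with the initial observer state as input channels:

```python
import torch

# Assumed sizes for illustration: nx spatial points, nt measurement samples.
nx, nt = 64, 100
initial_state = torch.randn(nx)              # observer state at t = 0 on the spatial grid
boundary_measurement = torch.randn(nt)       # boundary sensor signal over the time window

# Stack both signals as channels on a common (nx, nt) grid so an operator model
# can map a (2, nx, nt) input to the estimated state trajectory of shape (nx, nt).
inp = torch.stack(
    [initial_state.unsqueeze(1).expand(nx, nt),          # repeat initial state over time
     boundary_measurement.unsqueeze(0).expand(nx, nt)],  # repeat measurement over space
    dim=0,
)
print(inp.shape)                              # torch.Size([2, 64, 100])
```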
Deep learning surrogate models have shown promise for solving partial differential equations (PDEs). Among them, the Fourier neural operator (FNO) achieves good accuracy and is much faster than numerical solvers on problems such as fluid flow. However, FNO uses the fast Fourier transform (FFT), which is limited to rectangular domains with uniform grids. In this work, we propose a new framework, Geo-FNO, to solve PDEs on arbitrary geometries. Geo-FNO learns to deform the possibly irregular input (physical) domain into a latent space with a uniform grid, and the FFT-based FNO model is applied in that latent space. The resulting Geo-FNO model has both the computational efficiency of the FFT and the flexibility to handle arbitrary geometries. Geo-FNO is also flexible in its input formats, namely point clouds, meshes, and design parameters. We consider a variety of PDEs, such as the elasticity, plasticity, Euler, and Navier-Stokes equations, on both forward modeling and inverse design problems. Geo-FNO is substantially faster than standard numerical solvers and more accurate than direct interpolation on existing ML-based PDE solvers such as the standard FNO.
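A toy sketch of the deform-then-transform idea (assumed shapes, a small deformation network, and a Gaussian-kernel deposition step standing in for the paper's geometry encoder; not the authors' implementation) is:

```python
import torch

# A learned map sends irregular physical coordinates to latent coordinates on the
# unit square; point values are then deposited onto a uniform latent grid, where
# FFT-based FNO layers can operate.
deform = torch.nn.Sequential(          # learned deformation  x_phys -> x_latent in [0, 1]^2
    torch.nn.Linear(2, 64), torch.nn.GELU(), torch.nn.Linear(64, 2), torch.nn.Sigmoid()
)

def to_uniform_grid(coords, values, s=32, bandwidth=0.05):
    """Deposit point-cloud values onto an s x s latent grid using Gaussian weights."""
    latent = deform(coords)                                   # (n, 2)
    g = torch.linspace(0.0, 1.0, s)
    grid = torch.stack(torch.meshgrid(g, g, indexing="ij"), dim=-1).reshape(-1, 2)
    d2 = torch.cdist(grid, latent).pow(2)                     # (s*s, n) squared distances
    w = torch.softmax(-d2 / bandwidth**2, dim=-1)             # normalized kernel weights
    return (w @ values).reshape(s, s)                         # gridded field in latent space

coords = torch.rand(500, 2)            # irregular sample points of the physical domain
values = torch.sin(4.0 * coords[:, 0]) # function values at those points
field = to_uniform_grid(coords, values)
print(field.shape)                     # torch.Size([32, 32]); spectral layers apply from here
```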
Machine learning techniques have been extensively studied for mask optimization problems, aiming at better mask printability, shorter turnaround time, better mask manufacturability, and so on. However, most of these works focus on initial solution generation for small design regions. To further realize the potential of machine learning techniques on mask optimization tasks, we present a convolutional Fourier neural operator (CFNO) that can efficiently learn layout tile dependencies and hence promises stitch-free, large-scale mask optimization with limited intervention from legacy tools. We discover the possibility of litho-guided self-training (LGST) through a trained machine learning model when solving non-convex optimization problems, which allows iterative model and dataset updates and brings significant improvements in model performance. Experimental results show that, for the first time, our machine-learning-based framework outperforms state-of-the-art academic numerical mask optimizers with an order-of-magnitude speedup.
Vision transformers have delivered tremendous success in representation learning. This is primarily due to effective token mixing through self-attention. However, self-attention scales quadratically with the number of pixels, which becomes infeasible for high-resolution inputs. To cope with this challenge, we propose the Adaptive Fourier Neural Operator (AFNO) as an efficient token mixer that learns to mix in the Fourier domain. AFNO is based on a principled foundation of operator learning, which allows us to frame token mixing as a continuous global convolution without any dependence on the input resolution. This principle was previously used to design FNO, which solves global convolutions efficiently in the Fourier domain and has shown promise in learning challenging PDEs. To handle the challenges of visual representation learning, such as discontinuities in images and high-resolution inputs, we propose principled architectural modifications to FNO that yield memory and computational efficiency. These include imposing a block-diagonal structure on the channel mixing weights, adaptively sharing weights across tokens, and sparsifying the frequency modes via soft-thresholding and shrinkage. The resulting model is highly parallel with quasi-linear complexity and memory linear in the sequence size. In terms of efficiency and accuracy, AFNO outperforms self-attention mechanisms for few-shot segmentation. For Cityscapes segmentation with a Segformer-B3 backbone, AFNO can handle a sequence size of 65k and outperforms other efficient self-attention mechanisms.
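A hedged PyTorch sketch of an AFNO-style token mixer follows; the sizes and defaults (`channels`, `blocks`, `sparsity`) are illustrative assumptions, and the surrounding residual and normalization layers are omitted:

```python
import torch
import torch.nn.functional as F

class AFNOMixer(torch.nn.Module):
    """Token mixing in the Fourier domain: 2D FFT over the token grid, a
    block-diagonal complex MLP on the channel dimension shared across all
    frequencies, soft-thresholding (shrinkage) to sparsify the modes, inverse FFT."""

    def __init__(self, channels=64, blocks=8, sparsity=0.01):
        super().__init__()
        assert channels % blocks == 0
        self.blocks, self.bsize, self.sparsity = blocks, channels // blocks, sparsity
        scale = 0.02
        self.w1 = torch.nn.Parameter(scale * torch.randn(blocks, self.bsize, self.bsize, dtype=torch.cfloat))
        self.w2 = torch.nn.Parameter(scale * torch.randn(blocks, self.bsize, self.bsize, dtype=torch.cfloat))

    def forward(self, x):                              # x: (batch, H, W, channels)
        b, h, w, c = x.shape
        x_hat = torch.fft.rfft2(x, dim=(1, 2))         # token mixing as a global convolution
        x_hat = x_hat.reshape(b, h, w // 2 + 1, self.blocks, self.bsize)
        y = torch.einsum("bhwkc,kcd->bhwkd", x_hat, self.w1)          # block-diagonal mixing
        y = torch.view_as_complex(F.relu(torch.view_as_real(y)).contiguous())
        y = torch.einsum("bhwkc,kcd->bhwkd", y, self.w2)
        # soft-thresholding on real/imaginary parts sparsifies the frequency modes
        y = torch.view_as_complex(F.softshrink(torch.view_as_real(y), lambd=self.sparsity).contiguous())
        y = y.reshape(b, h, w // 2 + 1, c)
        return torch.fft.irfft2(y, s=(h, w), dim=(1, 2))

mixer = AFNOMixer()
tokens = torch.randn(2, 16, 16, 64)
print(mixer(tokens).shape)                             # torch.Size([2, 16, 16, 64])
```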
Accurate and efficient point cloud registration is challenging because noise and the large number of points impair correspondence search. The challenge remains an open research problem since most existing methods rely on correspondence search. To address it, we propose a new data-driven registration algorithm by investigating deep generative neural networks for point cloud registration. Given two point clouds, the motivation is to generate the aligned point cloud directly, which is very useful in many applications such as 3D matching and search. We design an end-to-end generative neural network for aligned point cloud generation to achieve this, containing three novel components. First, a point multi-layer perceptron (MLP) mixer (PointMixer) network is proposed to efficiently maintain both global and local structure information within each point cloud. Second, a feature interaction module is proposed to fuse information across the two point clouds. Third, a parallel and differentiable sample consensus method is proposed to compute the transformation matrix of the input point clouds from the generated registration result. The proposed generative neural network is trained within a GAN framework by maintaining the data distribution and structure similarity. Experiments on the ModelNet40 and 7Scene datasets demonstrate that the proposed algorithm achieves state-of-the-art accuracy and efficiency. Notably, compared with state-of-the-art correspondence-based algorithms, our method reduces the registration error (CD) by $2\times$ and the running time by $12\times$.
Machine learning methods have recently shown promise in solving partial differential equations (PDEs). They can be classified into two broad categories: approximating the solution function and learning the solution operator. Physics-informed neural networks (PINNs) are an example of the former, while the Fourier neural operator (FNO) is an example of the latter. Both approaches have shortcomings. The optimization of PINNs is challenging and prone to failure, especially on multi-scale dynamic systems. FNO does not suffer from this optimization issue since it performs supervised learning on a given dataset, but acquiring such data may be too expensive or infeasible. In this work, we propose the physics-informed neural operator (PINO), where we combine the operator-learning and function-optimization frameworks. This integrated approach improves the convergence rate and accuracy of both PINN and FNO models. In the operator-learning phase, PINO learns the solution operator over multiple instances of a parametric PDE family. In the test-time optimization phase, PINO optimizes the pre-trained operator ansatz for a queried instance of the PDE. Experiments show that PINO outperforms previous ML methods on many popular PDE families while retaining the extraordinary speed advantage of FNO over solvers. In particular, PINO accurately solves challenging long temporal transient flows and the Kolmogorov flow, on which other baseline ML methods fail to converge.
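The two-phase structure can be illustrated on a toy 1D periodic problem u_xx = f; the sketch below uses a small stand-in MLP rather than an FNO and purely illustrative hyperparameters, so it conveys only the shape of the two losses and the test-time fine-tuning loop:

```python
import math
import torch

n = 64
x = torch.linspace(0.0, 2.0 * math.pi, n + 1)[:-1]
k = 2.0 * math.pi * torch.fft.rfftfreq(n, d=(x[1] - x[0]).item())

def residual(u, f):
    """PDE residual r = u_xx - f, with u_xx computed spectrally on the periodic grid."""
    u_xx = torch.fft.irfft(-(k ** 2) * torch.fft.rfft(u), n=n)
    return u_xx - f

# Stand-in operator network mapping the forcing f (on the grid) to the solution u.
model = torch.nn.Sequential(torch.nn.Linear(n, 128), torch.nn.GELU(), torch.nn.Linear(128, n))

# Phase 1: operator learning over many instances f -> u (data loss + equation loss).
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    a = torch.randn(8, 3)
    f = sum(a[:, j:j + 1] * torch.sin((j + 1) * x) for j in range(3))
    u_true = sum(-a[:, j:j + 1] / (j + 1) ** 2 * torch.sin((j + 1) * x) for j in range(3))
    loss = ((model(f) - u_true) ** 2).mean() + (residual(model(f), f) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: test-time optimization of the pre-trained ansatz on one queried instance,
# driven by the equation loss alone (no solution data needed).
f_query = torch.sin(2.0 * x).unsqueeze(0)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for step in range(100):
    loss = (residual(model(f_query), f_query) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```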
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks to learn operators that map between infinite-dimensional function spaces. We formulate the approximation of operators through the composition of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators. We prove a universal approximation theorem for our architecture. Furthermore, we introduce four classes of operator parameterizations: graph-based operators, low-rank operators, multipole-graph-based operators, and Fourier operators, and describe efficient algorithms for computing with each of them. The proposed neural operators are resolution-invariant: they share the same network parameters across different discretizations of the underlying function spaces and can be used for zero-shot super-resolution. Numerically, the proposed models show superior performance compared to existing machine-learning-based methodologies on Darcy flow and the Navier-Stokes equation, while being considerably faster than conventional PDE solvers.
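The generic layer described above, v ↦ σ(Wv + ∫ κ(x, y) v(y) dy), can be sketched with the integral replaced by a quadrature sum over the discretization points; the sizes below are assumptions, and this corresponds to a dense graph-style kernel parameterization rather than the Fourier one:

```python
import torch

class KernelIntegralLayer(torch.nn.Module):
    """One generic neural-operator layer: v(x) <- act( W v(x) + (1/n) * sum_y k(x, y) v(y) ).

    The kernel k is a small network on coordinate pairs and the integral is a
    quadrature sum over the discretization points, so the same parameters apply
    at any resolution. Shapes here are illustrative, not the paper's configuration."""

    def __init__(self, channels=16, coord_dim=1):
        super().__init__()
        self.channels = channels
        self.local = torch.nn.Linear(channels, channels)          # pointwise W v(x)
        self.kernel = torch.nn.Sequential(                        # k(x, y) as a matrix
            torch.nn.Linear(2 * coord_dim, 64), torch.nn.GELU(),
            torch.nn.Linear(64, channels * channels),
        )

    def forward(self, coords, v):                      # coords: (n, d), v: (n, c)
        n = coords.shape[0]
        pairs = torch.cat(
            [coords.unsqueeze(1).expand(n, n, -1), coords.unsqueeze(0).expand(n, n, -1)], dim=-1
        )
        k = self.kernel(pairs).reshape(n, n, self.channels, self.channels)
        integral = torch.einsum("xyij,yj->xi", k, v) / n           # quadrature over y
        return torch.nn.functional.gelu(self.local(v) + integral)

layer = KernelIntegralLayer()
coords = torch.linspace(0.0, 1.0, 50).unsqueeze(-1)    # the same layer works at any resolution
v = torch.randn(50, 16)
print(layer(coords, v).shape)                          # torch.Size([50, 16])
```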
Chaotic systems are notoriously difficult to predict because of their sensitivity to perturbations and the errors that accumulate through time-stepping. Despite this unpredictable behavior, for many dissipative systems the statistics of long-term trajectories are governed by an invariant measure supported on a set known as the global attractor; for many problems this set is finite-dimensional even when the state space is infinite-dimensional. For Markovian systems, the statistical properties of long-term trajectories are uniquely determined by the solution operator that maps the evolution of the system over an arbitrary positive time increment. In this work, we propose a machine learning framework to learn the underlying solution operator of dissipative chaotic systems, and we show that the resulting learned operator accurately captures both short-term trajectories and long-term statistical behavior. Using this framework, we are able to predict various statistics of the turbulent Kolmogorov flow dynamics at a Reynolds number of 5000.
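The rollout pattern implied by the Markovian formulation is simple: compose the learned fixed-increment solution operator with itself and accumulate statistics along the trajectory. A schematic sketch follows, with an untrained placeholder network `G` standing in for the learned operator, so the printed numbers are meaningless; with a trained operator the same loop estimates invariant-measure statistics:

```python
import torch

# Placeholder one-step map u_{n+1} = G(u_n) over a fixed time increment.
G = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 3))

u = torch.randn(1, 3)                     # initial condition
burn_in, horizon = 100, 10_000
running_mean = torch.zeros(3)

with torch.no_grad():
    for step in range(burn_in + horizon):
        u = G(u)                          # advance by one time increment
        if step >= burn_in:               # discard the transient before averaging
            running_mean += u.squeeze(0) / horizon

print(running_mean)                       # a time-averaged statistic over the attractor
```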